17 research outputs found

    Investigation of the effect of articulatory-based second language production learning on speech perception

    The effect of second language production training on perception has been explored previously, but it remains unclear whether such training by itself influences the perception of speech sounds. In previous work, participants heard the correct pronunciation of the target while simultaneously undergoing production training, so the component of improvement attributable to production training alone could not be isolated. In the current study we therefore modified our electromagnetic articulometer-based training system, which provides learner-specific, head-corrected estimates of tongue position for a target utterance in real time, to eliminate the simultaneous presentation of audio stimuli. Japanese learners of the American English vowel /ae/ performed ABX perceptual testing on this vowel before and after the visually presented articulatory pronunciation training. We examined whether or not production-driven pronunciation improvement also induces a change in the perception of the second language sounds.
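
    The ABX procedure used before and after training can be sketched as follows. This is a minimal illustration of the paradigm's scoring logic only; the function names, stimulus labels, and trial count are invented for the example and are not the authors' software:

```python
import random

def run_abx_trial(token_a, token_b, token_x, respond):
    """One ABX trial: X is a repetition of either A or B, and the
    listener must say which. Returns True if the response is correct."""
    answer = "A" if token_x == token_a else "B"
    return respond(token_a, token_b, token_x) == answer

def abx_accuracy(pairs, respond, n_trials=20, seed=0):
    """Proportion correct over randomized ABX trials built from
    (category-1 token, category-2 token) stimulus pairs."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        a, b = rng.choice(pairs)
        x = rng.choice([a, b])          # X repeats either A or B
        correct += run_abx_trial(a, b, x, respond)
    return correct / n_trials

# A listener who always answers "A" performs at chance on average,
# while a listener who can discriminate the categories scores 1.0.
chance_listener = lambda a, b, x: "A"
acc = abx_accuracy([("ae_1", "a_1"), ("ae_2", "a_2")], chance_listener)
```

    Comparing such accuracies before and after training is what reveals any perceptual change induced by the production training.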

    A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach in which an articulatory target is presented in real time to facilitate L2 pronunciation learning. The approach trains learners to adjust their articulatory positions to match targets for an L2 vowel estimated from their own productions of vowels that overlap between the L1 and the L2. Training of Japanese learners on the American English vowel /ae/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.
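
    The feedback loop described above — comparing a tracked tongue position against an estimated articulatory target in real time — can be sketched roughly as below. The tolerance value, the weighted target-estimation scheme, and the function names are illustrative assumptions, not the system's actual design:

```python
import math

def estimate_target(shared_vowel_positions, weights):
    """Place an L2 articulatory target as a weighted combination of the
    learner's own positions for vowels shared by L1 and L2 (purely
    illustrative; the paper's estimation procedure may differ)."""
    x = sum(w * p[0] for w, p in zip(weights, shared_vowel_positions))
    y = sum(w * p[1] for w, p in zip(weights, shared_vowel_positions))
    return (x, y)

def feedback_frame(tongue_xy, target_xy, tolerance_mm=3.0):
    """One frame of visual feedback: the distance (mm) from the learner's
    head-corrected tongue position to the target, plus a flag for whether
    the learner is currently within tolerance of it."""
    dist = math.hypot(tongue_xy[0] - target_xy[0],
                      tongue_xy[1] - target_xy[1])
    return {"distance_mm": dist, "on_target": dist <= tolerance_mm}
```

    In a real-time system, `feedback_frame` would be evaluated at the articulometer's sampling rate and rendered visually for the learner.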

    Neural Network Model of Context-Dependent Neuronal Activity in Inferotemporal Cortex

    Neuronal activities related to context-dependent recall have been found in the monkey inferotemporal cortex. If we set the same task for an artificial neural network, however, a serious computational difficulty arises. In the present paper, we overcome this difficulty by implementing a novel method of contextual modulation, termed selective desensitization, and construct a neural network model that performs the same context-dependent memory task as that assigned to the monkey. The model, being consistent with the anatomical structure of the inferotemporal lobe as well as with physiological findings, not only reproduces the empirical data well but also gives a clear account of a phenomenon that had not been explicable to date. This strongly suggests that the brain implements context-dependent recall based on the same principle as that adopted in the model.

    Influences of transformed auditory feedback with first three formant frequencies

    Auditory feedback plays an important role in linking speech production to perception, and it directly affects speech production under altered feedback conditions such as noisy environments, delayed auditory feedback (DAF), and transformed auditory feedback (TAF). Previous investigations have shown that compensation is the main response to TAF of voice features such as fundamental frequency or the first two formant frequencies (F1 and F2). However, the human response to perturbations of the first three formant frequencies (F1, F2, and F3) remains unclear. The purpose of the current study is therefore to examine the influence of TAF of F1, F2, and F3 on speech production. Results from 9 subjects showed that the average latency of the response to TAF was within 140 ms. Moreover, the predominant response was a following response, quite different from the compensation reported in previous research. The cause of these two distinct responses to TAF remains to be clarified in future work.
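
    The distinction between a compensatory and a following response, and the latency measurement, can be illustrated with a small sketch. The thresholds, frame rate, and function names here are assumptions for illustration, not the study's analysis pipeline:

```python
def classify_taf_response(produced_hz, baseline_hz, shift_hz):
    """Classify a formant response to TAF: a production change in the same
    direction as the imposed shift is a 'following' response; a change in
    the opposite direction is 'compensation'."""
    change = sum(produced_hz) / len(produced_hz) - baseline_hz
    if change == 0:
        return "none"
    return "following" if (change > 0) == (shift_hz > 0) else "compensation"

def response_latency_ms(formant_track_hz, baseline_hz, threshold_hz,
                        frame_ms=5.0):
    """Latency of the response: time of the first analysis frame whose
    formant deviates from baseline by more than the threshold, or None
    if no response is detected."""
    for i, f in enumerate(formant_track_hz):
        if abs(f - baseline_hz) > threshold_hz:
            return i * frame_ms
    return None
```

    Under this scheme, a +50 Hz shift answered with raised production is "following", while lowered production is the classical compensatory response.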

    Articulatory Characteristics of Expressive Speech in Activation-Evaluation Space

    In this study, the influences of articulatory movement patterns related to the Activation and Evaluation dimensions of emotion are investigated. Two professional actors were asked to utter five types of emotional speech at three degrees of intensity; the emotion types were selected from Activation-Evaluation space. Articulatory data and sound waves were collected simultaneously using an electromagnetic articulograph (EMA). The results show that the jaw is raised for Joy or Anger, the lips are thinned for Joy, and the tongue dorsum moves back for Joy. These results suggest that jaw position affects the Activation dimension, while lip and/or tongue positions affect the Evaluation dimension.

    Articulation, Acoustics and Perception of Mandarin Chinese Emotional Speech

    This paper studies the articulatory, acoustic, and perceptual characteristics of Mandarin Chinese emotional utterances produced by two speakers expressing Neutral, Angry, Sad, and Happy emotions. Articulatory patterns were recorded using ElectroMagnetic Articulography (EMA), together with acoustic recordings. The acoustic and articulatory analysis revealed that Happy and Angry speech was generally higher-pitched, louder, and produced with a more open mouth than Neutral or Sad speech. Sad is produced with a low, backed tongue dorsum position and Happy with a forward position, and for one speaker durations were longer for Angry and Sad. Moreover, F1 and F2 are more dispersed (i.e., hyperarticulated) in emotional speech than in Neutral speech. Perception tests conducted with 18 native listeners suggest that listeners were able to perceive the expressed emotions far above chance level; the louder and higher-pitched the utterance, the more emotional the speech tends to be perceived. We also explore the specific articulatory and acoustic correlates of each type of emotional speech and how they impact perception.
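
    The formant-dispersion measure behind the hyperarticulation claim is commonly computed as the mean Euclidean distance of vowel tokens from the centroid of the F1-F2 space. A minimal sketch (the exact metric and formant values used in the paper may differ; the numbers below are illustrative):

```python
import math

def formant_dispersion(f1_hz, f2_hz):
    """Mean Euclidean distance of (F1, F2) vowel tokens from their
    centroid; larger values indicate a more expanded (hyperarticulated)
    vowel space."""
    n = len(f1_hz)
    c1, c2 = sum(f1_hz) / n, sum(f2_hz) / n
    return sum(math.hypot(a - c1, b - c2)
               for a, b in zip(f1_hz, f2_hz)) / n

# An expanded version of the same vowel space yields a larger dispersion:
neutral = formant_dispersion([300, 700, 500], [2300, 1200, 1700])
happy = formant_dispersion([250, 800, 500], [2500, 1000, 1750])
```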